Deep transform and metric learning network: Wedding deep dictionary learning and neural network
Authors
Abstract
On account of its many successes in inference tasks and imaging applications, Dictionary Learning (DL) and related sparse optimization problems have garnered much research interest. In the DL area, most solutions focus on single-layer dictionaries, whose reliance on handcrafted features achieves somewhat limited performance. With the rapid development of deep learning, improved DL methods called Deep Dictionary Learning (DDL) have recently been proposed as an end-to-end, flexible solution with much higher performance. The DDL techniques have, however, also fallen short on a number of issues, namely computational cost and difficulties in gradient updating and initialization. While a few differentiable programming approaches have been proposed to speed up DL, none of them could ensure an efficient, scalable, and robust solution for DDL methods. To that end, we propose herein a novel differentiable programming approach, which yields an efficient, competitive, and reliable DDL solution. The proposed method jointly learns deep transforms and deep metrics, where each DL layer is theoretically reformulated as a combination of one linear layer and a Recurrent Neural Network (RNN). The RNN is shown to flexibly account for the layer-associated approximation together with a learnable metric. Additionally, our work unveils new insights into Neural Networks (NN) and DDL, bridging combinations of linear and recurrent layers with DDL. Extensive experiments on image classification are carried out to demonstrate that our method can not only outperform existing DDL methods on several counts, including efficiency, scaling, and discrimination, but also achieve better accuracy and increased robustness against adversarial perturbations than CNNs.
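The layer reformulation described in the abstract — one DL layer expressed as a linear layer followed by an RNN computing a sparse approximation — can be sketched as a LISTA-style unrolled recurrence. This is a minimal NumPy sketch under that assumption; the weight matrices, the soft-thresholding nonlinearity, and the names `ddl_layer` and `soft_threshold` are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(z, lam):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm,
    # which acts as the "activation" producing sparse codes.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ddl_layer(x, W, S, lam, n_steps=5):
    # One dictionary-learning layer unrolled as a linear map (W) followed by
    # a shared-weight recurrence (S) refining the sparse code z.
    b = W @ x                      # linear layer: learned transform of the input
    z = soft_threshold(b, lam)     # initial sparse code
    for _ in range(n_steps):       # RNN part: each step reuses the same S and lam
        z = soft_threshold(b + S @ z, lam)
    return z

# Toy usage with random weights (lam would be learnable in a real model).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((16, 8)) * 0.1
S = rng.standard_normal((16, 16)) * 0.05
code = ddl_layer(x, W, S, lam=0.05)
print(code.shape)  # (16,)
```

Because every step of the recurrence is differentiable, `W`, `S`, and the threshold can all be trained end-to-end by backpropagation, which is the sense in which DDL becomes a differentiable programming problem.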
Similar Resources
Deep Dictionary Learning: A PARametric NETwork Approach
Deep dictionary learning seeks multiple dictionaries at different image scales to capture complementary coherent characteristics. We propose a method for learning a hierarchy of synthesis dictionaries with an image classification goal. The dictionaries and classification parameters are trained by a classification objective, and the sparse features are extracted by reducing a reconstruction loss...
How to Train Your Deep Neural Network with Dictionary Learning
Currently there are two predominant ways to train deep neural networks. The first one uses restricted Boltzmann machine (RBM) and the second one autoencoders. RBMs are stacked in layers to form deep belief network (DBN); the final representation layer is attached to the target to complete the deep neural network. Autoencoders are nested one inside the other to form stacked autoencoders; once th...
Deep Metric Learning Using Triplet Network
Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ran...
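The distance-comparison objective of the triplet network described above can be illustrated with a standard triplet (hinge) loss: embeddings are trained so that an anchor lies closer to a positive example than to a negative one by at least a margin. This is a generic sketch, not the exact loss of the cited paper, and the margin value is an arbitrary choice.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge loss pushing d(anchor, positive) below d(anchor, negative)
    # by at least `margin`; zero once the margin is satisfied.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([2.0, 0.0])   # far from the anchor
loss = triplet_loss(a, p, n)  # 0.0: the margin is already satisfied
```

Swapping the positive and negative examples makes the loss strictly positive, which is the gradient signal that pulls same-class embeddings together and pushes different-class embeddings apart.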
Short term electric load prediction based on deep neural network and wavelet transform and input selection
Electricity demand forecasting is one of the most important factors in the planning, design, and operation of competitive electrical systems. However, most of the load forecasting methods are not accurate. Therefore, in order to increase the accuracy of the short-term electrical load forecast, this paper proposes a hybrid method for predicting electric load based on a deep neural network with a...
Compressed Learning: A Deep Neural Network Approach
This work presents a novel deep learning approach to Compressed Learning that jointly optimizes the sensing and inference operators, outperforms state-of-the-art CL methods on MNIST and CIFAR10, and is extendible to numerous CL applications. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program, ERC Grant agre...
Journal
Journal title: Neurocomputing
Year: 2022
ISSN: ['0925-2312', '1872-8286']
DOI: https://doi.org/10.1016/j.neucom.2022.08.069